17 research outputs found

    Approaching simulation to modelers: a user interface for large-scale demographic simulation

    Extended version. Agent-based modeling is a promising modeling tool for the study of population dynamics. Two of the main obstacles hindering the use of agent-based simulation in practice are its scalability, when the analysis requires large-scale models such as policy studies, and its ease of use, especially for users with no programming experience. While there has been significant work on the scalability issue, the ease-of-use aspect has not been addressed with the same intensity. This paper presents a graphical user interface designed for a simulation tool that allows modelers with no programming background to specify agent-based demographic models and run them in parallel environments. The interface eases the definition of models that describe individual and group dynamics processes with both qualitative and quantitative data. The main advantage is that it allows users to transparently run their models on high-performance computing infrastructures.
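    For readers unfamiliar with the approach, below is a minimal sketch of the kind of agent-based demographic model such a tool lets users specify without programming; the state variables, rates, and function names are illustrative assumptions, not values or code from the paper.

        import random

        # Minimal agent-based demographic model: each agent is an individual
        # represented by its age; every simulated year we apply age-dependent
        # mortality and fertility. Rates below are illustrative assumptions only.

        def mortality_prob(age):
            return min(1.0, 0.005 + 0.00001 * age ** 2)  # rises with age

        def fertility_prob(age):
            return 0.08 if 20 <= age <= 40 else 0.0

        def step(population):
            survivors, newborns = [], 0
            for age in population:
                if random.random() < fertility_prob(age):
                    newborns += 1
                if random.random() >= mortality_prob(age):
                    survivors.append(age + 1)
            return survivors + [0] * newborns

        population = [random.randint(0, 80) for _ in range(10_000)]
        for year in range(50):
            population = step(population)
        print(f"population after 50 years: {len(population)}")

    A GUI such as the one described lets modelers state these transition rules declaratively, while the tool takes care of executing them at scale on parallel hardware.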

    Enhancing model quality and scalability for mining business processes with invisible tasks in non-free choice

    No full text
    At present, business processes are growing rapidly, resulting in various types of activity relationships and big event logs. Discovering invisible tasks and invisible tasks in non-free choice is challenging. The α$ algorithm mines invisible prime tasks in non-free choice based on pairs of events, so it consumes considerable processing time. In addition, the invisible task formation by α$ is limited to skip, switch, and redo conditions. This study proposes a graph-based algorithm named Graph Advanced Invisible Task in Non-free choice (GAITN) to form invisible tasks in non-free choice for stacked branching relationship conditions and to handle large event logs. GAITN partitions the event log and creates rules for merging the partitions to scale up the volume of discoverable events. GAITN then utilises the rules of a previous graph-based process mining algorithm to visualise branching relationships (XOR, OR, AND) and creates rules for mining invisible tasks in non-free choice based on the obtained branching relationships. This study compared the performance of GAITN with that of Graph Invisible Task (GIT), α$, and Fodina, and found that GAITN produces process models with better fitness, precision, generalisation, and simplicity measures over a larger number of events. GAITN significantly improves the quality of the process model and the scalability of the process mining algorithm.
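    As a hedged illustration of one building block the abstract relies on, the sketch below partitions a toy event log, mines directly-follows relations (from which branching relationships such as XOR/OR/AND are later inferred) per partition, and merges the partial results; the log format, helper names, and partition size are assumptions for illustration, not GAITN's actual implementation.

        from collections import defaultdict

        # Toy event log: each trace is the ordered list of activities of one case.
        # Partitioning the log, mining each partition, and merging the results
        # mirrors, at a very high level, the partition-then-merge idea in the
        # abstract; the data and helper names are illustrative assumptions only.
        event_log = [
            ["register", "check", "pay", "ship"],
            ["register", "check", "reject"],
            ["register", "pay", "check", "ship"],
        ]

        def directly_follows(traces):
            """Count how often activity a is directly followed by activity b."""
            relations = defaultdict(int)
            for trace in traces:
                for a, b in zip(trace, trace[1:]):
                    relations[(a, b)] += 1
            return relations

        def merge(parts):
            merged = defaultdict(int)
            for part in parts:
                for pair, count in part.items():
                    merged[pair] += count
            return merged

        # Partition into chunks of two traces, mine each partition, then merge.
        partitions = [event_log[i:i + 2] for i in range(0, len(event_log), 2)]
        merged = merge(directly_follows(p) for p in partitions)
        print(dict(merged))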

    Budget allocation of food procurement for natural disaster response

    No full text
    This paper studies a variant of the lot sizing problem that arises in the context of disaster management. In this problem, a fixed budget has to be allocated efficiently over multiple time periods to procure large quantities of a staple food that will be stored and later delivered to people affected by disaster strikes, whose numbers are unknown in advance. Starting from the deterministic model, where perfect information is assumed, different formulations are constructed to address the uncertainties: classical robust optimisation, risk-minimisation stochastic programming, and adjustable robust optimisation. Experiments conducted using data from West Java, Indonesia, allow us to discuss the advantages and drawbacks of each method. Our methods constitute a toolbox that supports decision makers in making procurement decisions and answering managerial questions, such as which annual budget is fair and safe, or when storage peaks are likely to occur.
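    As a hedged sketch of what the deterministic, perfect-information starting point could look like, a lot-sizing formulation with a budget constraint is written below; all symbols and cost terms are assumptions for illustration, not the paper's exact model.

        % Illustrative deterministic lot-sizing model under a fixed budget B.
        % q_t: quantity procured in period t, I_t: end-of-period inventory,
        % s_t: unmet demand, d_t: demand of affected people (known here),
        % c_t, h_t, p_t: unit procurement, holding, and shortage costs.
        \begin{align}
          \min_{q,\,I,\,s \ge 0} \quad & \sum_{t=1}^{T} \bigl( h_t I_t + p_t s_t \bigr) \\
          \text{s.t.} \quad & I_t = I_{t-1} + q_t - d_t + s_t, && t = 1,\dots,T, \\
          & \sum_{t=1}^{T} c_t q_t \le B, \qquad I_0 = 0.
        \end{align}

    The robust and stochastic variants mentioned in the abstract then replace the known demands d_t with an uncertainty set or a probability distribution, respectively.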